She didn't expect to fall in love with a chatbot - and then have to say goodbye

BBC News

Rae began speaking to Barry last year after the end of a difficult divorce. She was unfit and unhappy and turned to ChatGPT for advice on diet, supplements and skincare. She had no idea she would fall in love. He lives on an old model of ChatGPT, one that its owner, OpenAI, announced it would retire on 13 February. That she could lose Barry on the eve of Valentine's Day came as a shock to Rae - and to many others who have found a companion, friend, or even a lifeline in the old model, ChatGPT-4o.


Interpreting the Weight Space of Customized Diffusion Models

Neural Information Processing Systems

We investigate the space of weights spanned by a large collection of customized diffusion models. We populate this space by creating a dataset of over 60,000 models, each of which is a base model fine-tuned to insert a different person's visual identity.


Model LEGO: Creating Models Like Disassembling and Assembling Building Blocks

Neural Information Processing Systems

With the rapid development of deep learning, the increasing complexity and scale of parameters make training a new model increasingly resource-intensive. In this paper, we start from the classic convolutional neural network (CNN) and explore a paradigm that does not require training to obtain new models. Similar to the birth of CNN inspired by receptive fields in the biological visual system, we draw inspiration from the information subsystem pathways in the biological visual system and propose Model Disassembling and Assembling (MDA). During model disassembling, we introduce the concept of relative contribution and propose a component locating technique to extract task-aware components from trained CNN classifiers. For model assembling, we present the alignment padding strategy and parameter scaling strategy to construct a new model tailored for a specific task, utilizing the disassembled task-aware components. The entire process is akin to playing with LEGO bricks, enabling arbitrary assembly of new models, and providing a novel perspective for model creation and reuse. Extensive experiments showcase that task-aware components disassembled from CNN classifiers, or new models assembled using these components, closely match or even surpass the performance of the baseline, demonstrating promising results for model reuse. Furthermore, MDA exhibits diverse potential applications, with comprehensive experiments exploring model decision route analysis, model compression, knowledge distillation, and more.
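The "component locating" idea above can be illustrated with a toy sketch: rank a layer's channels by how strongly they respond to one target class relative to all inputs, and keep the top-scoring channels as a task-aware component. The function names and this particular contribution measure are illustrative assumptions, not the paper's exact method.

```python
# Toy MDA-style component locating: score each channel by its mean
# activation on target-class samples relative to its mean activation
# overall, then keep the top-k channels as a task-aware component.

def relative_contribution(activations, class_mask):
    """Per-channel score: target-class mean activation / overall mean.

    activations: list of per-sample channel activations, shape [n][c]
    class_mask:  list of bools marking samples of the target class
    """
    n_channels = len(activations[0])
    scores = []
    for ch in range(n_channels):
        target = [a[ch] for a, m in zip(activations, class_mask) if m]
        overall = [a[ch] for a in activations]
        scores.append((sum(target) / len(target)) /
                      (sum(overall) / len(overall) + 1e-9))
    return scores

def locate_component(activations, class_mask, top_k):
    """Return indices of the top_k channels most tied to the target class."""
    scores = relative_contribution(activations, class_mask)
    ranked = sorted(range(len(scores)), key=lambda ch: scores[ch], reverse=True)
    return sorted(ranked[:top_k])

acts = [[0.9, 0.1, 0.5], [0.8, 0.2, 0.4],   # class-A samples
        [0.1, 0.9, 0.5], [0.2, 0.8, 0.6]]   # other samples
mask = [True, True, False, False]
print(locate_component(acts, mask, top_k=2))  # → [0, 2]
```

In the paper's setting the kept channels (and their weights) would then be padded and rescaled into a new, smaller model for the target task; here the sketch only shows the selection step.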


Unifying Predictions of Deterministic and Stochastic Physics in Mesh-reduced Space with Sequential Flow Generative Model

Neural Information Processing Systems

Accurate prediction of dynamical systems in unstructured meshes has recently shown successes in scientific simulations. Many dynamical systems have a nonnegligible level of stochasticity introduced by various factors (e.g.


FUG: Feature-Universal Graph Contrastive Pre-training for Graphs with Diverse Node Features

Neural Information Processing Systems

Graph Neural Networks (GNNs), known for their effective graph encoding, are extensively used across various fields. Graph self-supervised pre-training, which trains GNN encoders without manual labels to generate high-quality graph representations, has garnered widespread attention. However, due to the inherently complex characteristics of graphs, GNN encoders pre-trained on one dataset struggle to adapt directly to others that have different node feature shapes.


OpenAI releases GPT-5.2 to take on Google and Anthropic

Engadget

The new model is all about professional work. OpenAI's code-red response to Google's Gemini 3 Pro has arrived. On the same day the company announced a Sora licensing pact with Disney, it took the wraps off GPT-5.2. OpenAI is touting the new model as its best yet for real-world, professional use. "It's better at creating spreadsheets, building presentations, writing code, perceiving images, understanding long contexts, using tools, and handling complex, multi-step projects," said OpenAI.


Hands On With Google's Nano Banana Pro Image Generator

WIRED

Google's latest AI image model is vastly better than the previous release at generating text in images. You can expect companies to go buck wild with this update. Nano Banana Pro generated this image, assembling a crowd of standalone characters into one scene. Corporate AI slop feels inescapable in 2025. From website banner ads to outdoor billboards, images generated by businesses using AI tools surround me.


Google's new Gemini 3 "vibe-codes" responses and comes with its own agent

MIT Technology Review

Google today unveiled Gemini 3, a major upgrade to its flagship multimodal model. The firm says the new model is better at reasoning, has more fluid multimodal capabilities (the ability to work across voice, text or images), and will work like an agent. The previous model, Gemini 2.5, supports multimodal input. Users can feed it images, handwriting, or voice. But it usually requires explicit instructions about the format the user wants back, and it defaults to plain text regardless. Gemini 3, by contrast, introduces what Google calls "generative interfaces," which allow the model to make its own choices about what kind of output fits the prompt best, assembling visual layouts and dynamic views on its own instead of returning a block of text. Ask for travel recommendations and it may spin up a website-like interface inside the app, complete with modules, images, and follow-up prompts such as "How many days are you traveling?" or "What kinds of activities do you enjoy?" It also presents clickable options based on what you might want next. When asked to explain a concept, Gemini 3 may sketch a diagram or generate a simple animation on its own if it believes a visual is more effective.


Data reuse enables cost-efficient randomized trials of medical AI models

Nercessian, Michael, Zhang, Wenxin, Schubert, Alexander, Yang, Daphne, Chung, Maggie, Alaa, Ahmed, Yala, Adam

arXiv.org Artificial Intelligence

Randomized controlled trials (RCTs) are indispensable for establishing the clinical value of medical artificial-intelligence (AI) tools, yet their high cost and long timelines hinder timely validation as new models emerge rapidly. Here, we propose BRIDGE, a data-reuse RCT design for AI-based risk models. AI risk models support a broad range of interventions, including screening, treatment selection, and clinical alerts. BRIDGE trials recycle participant-level data from completed trials of AI models when legacy and updated models make concordant predictions, thereby reducing the enrollment requirement for subsequent trials. We provide a practical checklist for investigators to assess whether reusing data from previous trials allows for valid causal inference and preserves type I error. Using real-world datasets across breast cancer, cardiovascular disease, and sepsis, we demonstrate concordance between successive AI models, with up to 64.8% overlap in top 5% high-risk cohorts. We then simulate a series of breast cancer screening studies, where our design reduced required enrollment by 46.6%--saving over US$2.8 million--while maintaining 80% power. By transforming trials into adaptive, modular studies, our proposed design makes Level I evidence generation feasible for every model iteration, thereby accelerating cost-effective translation of AI into routine care. Introduction Artificial intelligence (AI) models have the potential to transform patient care by identifying high-risk individuals using high-dimensional data--such as imaging, electronic health records, or time-series data--to personalize screening, prevention, and treatment decisions across a range of diseases, including cancer and heart disease.
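The concordance measurement that drives BRIDGE-style data reuse can be sketched as follows: take each model's top-q high-risk cohort and compute how much of the legacy cohort the updated model also flags. The function names and the simple overlap fraction are illustrative assumptions; the paper's actual concordance and reuse criteria are more involved.

```python
# Hedged sketch: overlap between the top-q high-risk cohorts of a
# legacy and an updated risk model, the quantity behind claims like
# "up to 64.8% overlap in top 5% high-risk cohorts".

def top_fraction(scores, q):
    """IDs of the top q fraction of patients by predicted risk."""
    k = max(1, int(len(scores) * q))
    ranked = sorted(scores, key=scores.get, reverse=True)
    return set(ranked[:k])

def cohort_overlap(legacy_scores, new_scores, q=0.05):
    """Fraction of the legacy top-q cohort also flagged by the new model."""
    legacy_cohort = top_fraction(legacy_scores, q)
    new_cohort = top_fraction(new_scores, q)
    return len(legacy_cohort & new_cohort) / len(legacy_cohort)

# Toy risk scores for 100 patients: the updated model mostly agrees
# with the legacy model but demotes the three highest-risk patients.
legacy = {f"p{i}": i / 100 for i in range(100)}
updated = {f"p{i}": (i + (5 if i < 97 else -50)) / 100 for i in range(100)}
print(f"top-5% overlap: {cohort_overlap(legacy, updated):.0%}")  # → 40%
```

Higher overlap means more participant-level data from the completed trial can be recycled for the new model's trial, which is what lowers the enrollment requirement.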